Gaussian processes are a key component of many flexible statistical and machine learning models. However, they exhibit cubic computational complexity and high memory requirements due to the need to invert and store the full covariance matrix. To circumvent this, mixtures of Gaussian process experts have been considered, in which data points are assigned to independent experts, reducing the complexity by allowing computations to be based on smaller, local covariance matrices. Moreover, mixtures of Gaussian process experts substantially enrich the model's flexibility, allowing for behaviours such as non-stationarity, heteroscedasticity, and discontinuities. In this work, we construct a novel inference approach based on a nested Monte Carlo sampler to simultaneously infer both the gating network and the Gaussian process expert parameters. Compared to importance sampling, this greatly improves inference, particularly in settings where a stationary Gaussian process is inappropriate, while still remaining fully parallelisable.
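The complexity reduction from expert assignment can be sketched with a toy hard-gated mixture: each expert only inverts its own local covariance matrix instead of the full one. The gating rule, kernel lengthscale, and data below are illustrative; the paper's nested Monte Carlo sampler and learned gating network are not reproduced here.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel matrix between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(X, y, Xs, noise=1e-2):
    # Exact GP regression posterior mean: O(n^3) in the training-set size.
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xs, X) @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 4, 200))
y = np.sin(3 * X) + 0.1 * rng.standard_normal(200)
Xs = np.linspace(0, 4, 50)

# Hard gating: assign each point to one of two experts by input location,
# so each expert inverts a local covariance of roughly half the size.
gate = X < 2.0
mu = np.where(Xs < 2.0,
              gp_predict(X[gate], y[gate], Xs),
              gp_predict(X[~gate], y[~gate], Xs))
```

With two equal-sized experts, the dominant cost drops from one n^3 solve to two (n/2)^3 solves, a fourfold reduction, and the experts can be fit in parallel.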
We analyse the behaviour of projected stochastic gradient descent, focusing on the case where the optimum lies on the boundary of the constraint set and the gradient does not vanish at the optimum. Here, iterates may make progress against the objective at each step. When this occurs and an appropriate moment condition on the noise holds, we prove that the convergence rate to the optimum of constrained stochastic gradient descent differs from, and is typically faster than, that of the unconstrained stochastic gradient descent algorithm. Our results show that the concentration about the optimum is exponentially distributed, rather than normally distributed as typically determines the limiting convergence in the unconstrained case. The methods we develop rely on a geometric ergodicity proof, extending a result of Hajek (1982) on Markov chains to the area of stochastic approximation algorithms. As examples, we show how the results apply to linear programming and tabular reinforcement learning.
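The boundary phenomenon can be illustrated with a toy projected SGD run on a linear objective over a box, where the optimum sits at a corner and the gradient never vanishes. The objective, step-size schedule, and noise level are illustrative choices, not the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(1)
c = np.array([1.0, 2.0])          # linear objective f(x) = c @ x over [0, 1]^2
x = np.array([0.5, 0.5])          # start in the interior of the box

for t in range(1, 2001):
    g = c + 0.1 * rng.standard_normal(2)   # noisy gradient; non-vanishing at the optimum
    x = x - (1.0 / t) * g                  # SGD step with a 1/t step size
    x = np.clip(x, 0.0, 1.0)               # projection onto the constraint set

# The optimum x* = (0, 0) lies on the boundary; the projection pins the
# iterates there, and the noise can only push them back into the interior.
```

Because the drift points out of the feasible set at the corner, excursions away from the optimum decay geometrically, which is the mechanism behind the exponential (rather than normal) concentration.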
In this paper, we consider Bayesian inference for deep neural networks (DNNs), and in particular the trace-class neural network (TNN) priors proposed by Sell et al. [39]. Such priors are a more robust alternative to classical architectures in the context of inference problems. For this work, we develop multilevel Monte Carlo (MLMC) methods for such models. MLMC is a popular variance reduction technique, with particular applications in Bayesian statistics and uncertainty quantification. We show how a particular advanced MLMC method introduced in [4] can be applied to Bayesian inference for DNNs, and establish mathematically that the computational cost of achieving a given mean squared error, associated with posterior expectations, can be reduced by several orders of magnitude relative to more conventional techniques. To verify such results, we provide numerous numerical experiments on model problems arising in machine learning, including Bayesian regression, as well as Bayesian classification and reinforcement learning.
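The MLMC idea itself can be sketched on a standard toy problem: estimating E[S_T] for geometric Brownian motion under Euler discretization, where the level-l correction couples fine and coarse paths through shared Brownian increments. The GBM example and all parameters are illustrative, not the paper's DNN setting.

```python
import numpy as np

def euler_gbm(S0, r, sig, T, n_steps, dW):
    # Euler-Maruyama paths of geometric Brownian motion given increments dW.
    S = np.full(dW.shape[0], S0)
    dt = T / n_steps
    for i in range(n_steps):
        S = S + r * S * dt + sig * S * dW[:, i]
    return S

def mlmc_estimate(L, N, S0=1.0, r=0.05, sig=0.2, T=1.0, seed=2):
    # Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        nf = 2 ** l                                # fine level uses 2^l steps
        dt = T / nf
        dW = np.sqrt(dt) * rng.standard_normal((N, nf))
        Pf = euler_gbm(S0, r, sig, T, nf, dW)
        if l == 0:
            est += Pf.mean()                       # base-level estimate of E[P_0]
        else:
            # The coarse path reuses the same Brownian increments, pairwise
            # summed, so the correction E[P_l - P_{l-1}] has small variance.
            dWc = dW[:, 0::2] + dW[:, 1::2]
            Pc = euler_gbm(S0, r, sig, T, nf // 2, dWc)
            est += (Pf - Pc).mean()
    return est

price = mlmc_estimate(L=5, N=20000)
# Exact value: E[S_T] = S0 * exp(r * T), about 1.0513 for these parameters.
```

Because the corrections shrink with the level, fewer samples are needed at fine levels, which is the source of the cost reduction the abstract refers to.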
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and variation in image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the results indicate that PMT-IQA outperforms the comparison approaches, and that both the MS and PMT modules improve the model's performance.
Unlike traditional distributed machine learning, federated learning keeps data local for training and then aggregates the models on the server, which addresses the data security problems that may arise in traditional distributed machine learning. However, during training, the transmission of model parameters can impose a significant load on the network bandwidth, and it has been pointed out that the vast majority of transmitted model parameters are redundant. Building on this observation, we explore the distribution of a selected subset of model parameters and propose a deep hierarchical quantization compression algorithm, which further compresses the model and reduces the network load of data transmission through hierarchical quantization of the model parameters. We also adopt a dynamic client sampling strategy to accelerate the convergence of the model. Experimental results on different public datasets demonstrate the effectiveness of our algorithm.
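The core of parameter quantization for transmission can be sketched as layer-wise uniform affine quantization with different bit-widths per layer. The layer names, bit allocation, and affine scheme below are illustrative; the paper's algorithm additionally exploits the empirical parameter distribution and client sampling.

```python
import numpy as np

def quantize_layer(w, bits):
    # Uniform affine quantization of one layer's weights to `bits` bits.
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.uint8)   # integers actually transmitted
    return q, lo, scale

def dequantize_layer(q, lo, scale):
    # Server-side reconstruction from the integers plus (lo, scale) metadata.
    return lo + q.astype(np.float64) * scale

rng = np.random.default_rng(3)
# Hierarchical sketch: sensitive layers get more bits than redundant ones.
layers = {"conv1": (rng.standard_normal(1000), 8),
          "fc": (rng.standard_normal(5000), 4)}

for name, (w, bits) in layers.items():
    q, lo, scale = quantize_layer(w, bits)
    w_hat = dequantize_layer(q, lo, scale)
    # Round-to-nearest bounds the reconstruction error by half a step.
    assert np.abs(w - w_hat).max() <= scale / 2 + 1e-12
```

Sending `uint8` codes instead of 64-bit floats already cuts the payload by 8x; dropping to 4 bits per value (packed two per byte) would double that again, at the cost of a larger quantization step.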
Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as "the Bar Exam," as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant complete at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score below the rate required to pass the exam on their first try. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in "AI"? In this research, we document our experimental evaluation of the performance of OpenAI's `text-davinci-003` model, often referred to as GPT-3.5, on the multistate multiple-choice (MBE) section of the exam. While we find no benefit from fine-tuning over GPT-3.5's zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impact GPT-3.5's zero-shot performance. With the best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate on both Evidence and Torts. GPT-3.5's ranking of responses is also highly correlated with correctness: its top-two and top-three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by the nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.
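The reported top-k figures correspond to a simple computation over the model's ranked choices. The rankings and answer key below are entirely hypothetical, used only to show the metric; they are not the paper's data.

```python
def topk_accuracy(rankings, answers, k):
    # Fraction of questions whose correct choice appears in the model's top k.
    hits = sum(1 for ranked, gold in zip(rankings, answers) if gold in ranked[:k])
    return hits / len(answers)

# Hypothetical per-question rankings of the four MBE choices, best first.
rankings = [["B", "A", "D", "C"],
            ["C", "B", "A", "D"],
            ["A", "D", "B", "C"],
            ["D", "C", "A", "B"]]
answers = ["B", "B", "C", "D"]   # hypothetical answer key

top1 = topk_accuracy(rankings, answers, 1)   # 2 of 4 correct -> 0.5
top2 = topk_accuracy(rankings, answers, 2)   # 3 of 4 within top two -> 0.75
```

A top-2 rate well above the top-1 rate, as in the abstract's 71% vs. 50.3%, indicates that the model's ranking carries real signal even when its first choice is wrong.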
It is well known that conservative mechanical systems exhibit local oscillatory behaviours due to their elastic and gravitational potentials, which, together with the inertial properties of the system, completely characterise these periodic motions. The classification of these periodic behaviours and their geometric characterisation have been the subject of a long-running debate, which recently led to the so-called eigenmanifold theory. The eigenmanifold characterises nonlinear oscillations as a generalisation of linear eigenspaces. Motivated by performing periodic tasks efficiently, we use tools from this theory to construct an optimization problem aimed at inducing desired closed-loop oscillations through a state feedback law. We solve the resulting optimization problem via gradient-descent methods involving neural networks. Extensive simulations confirm the validity of the approach.
In this paper, we propose an effective unified control law for accurately tracking agile trajectories with lifting-wing quadcopters of different installation angles, which are capable of vertical takeoff and landing (VTOL) as well as high-speed cruise flight. First, we derive a differential flatness transform for the lifting-wing dynamics with a nonlinear model under the coordinated-turn condition. To improve tracking performance on agile trajectories, the proposed controller incorporates the state and input variables computed from differential flatness as feedforward. In particular, the jerk, the third-order derivative of the trajectory, is converted into an angular-velocity feedforward term, which significantly improves the system bandwidth. At the same time, feedback and feedforward outputs are combined to deal with external disturbances and model mismatch. The control algorithm has been thoroughly evaluated in outdoor flight tests, which show that it achieves accurate trajectory tracking.
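The jerk-to-angular-rate conversion can be sketched for a planar vehicle whose thrust axis must align with the specific-force direction. This is a strong simplification of the lifting-wing dynamics; the gravity constant and the hover example are illustrative only.

```python
def planar_rate_feedforward(acc, jerk, g=9.81):
    # The thrust must cancel gravity and produce `acc`, so its tilt angle is
    # theta = atan2(t_x, t_z) with (t_x, t_z) = (a_x, a_z + g). Differentiating
    # theta in time gives the angular-rate feedforward in terms of the jerk.
    ax, az = acc
    jx, jz = jerk
    tx, tz = ax, az + g              # specific-thrust vector components
    return (jx * tz - jz * tx) / (tx**2 + tz**2)

# Hover with a purely lateral jerk: the vehicle must start pitching
# immediately, even though position and velocity errors are still zero.
omega_ff = planar_rate_feedforward(acc=(0.0, 0.0), jerk=(1.0, 0.0))
# Here omega_ff = 1 / 9.81, roughly 0.102 rad/s.
```

Feeding this rate forward to the attitude loop lets the controller anticipate the rotation the trajectory demands, rather than waiting for an attitude error to build up, which is what raises the effective bandwidth.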
Probabilistic Law Discovery (PLD) is a logic-based machine learning method implementing a variant of probabilistic rule learning. In several aspects, PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined. The learning procedure of PLD solves an optimization problem related to the search for rules (called probabilistic laws) that have minimal length and relatively high probability. At inference time, ensembles of these rules are used for prediction. Probabilistic laws are human-readable, and PLD-based models are transparent and inherently interpretable. Applications of PLD include classification/clustering/regression tasks, as well as time-series analysis/anomaly detection and adaptive (robotic) control. In this paper, we outline the main principles of PLD, highlight its benefits and limitations, and provide some application guidelines.
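The flavour of the length/probability trade-off can be sketched by enumerating short conjunctions over boolean features and keeping those with high empirical probability. This toy miner, its thresholds, and the data are illustrative, not PLD's actual search procedure.

```python
from itertools import combinations

def mine_laws(rows, target, max_len=2, min_prob=0.8, min_support=2):
    # Enumerate short conjunctions of boolean features and keep those that
    # predict `target` with high empirical probability ("probabilistic laws").
    features = [f for f in rows[0] if f != target]
    laws = []
    for k in range(1, max_len + 1):
        for body in combinations(features, k):
            covered = [r for r in rows if all(r[f] for f in body)]
            if len(covered) < min_support:
                continue
            prob = sum(r[target] for r in covered) / len(covered)
            if prob >= min_prob:
                laws.append((body, prob))
    # Prefer shorter bodies and higher probabilities, mirroring PLD's
    # minimal-length / high-probability criterion for laws.
    laws.sort(key=lambda bp: (len(bp[0]), -bp[1]))
    return laws

rows = [{"a": 1, "b": 0, "y": 1},
        {"a": 1, "b": 1, "y": 1},
        {"a": 0, "b": 1, "y": 0},
        {"a": 1, "b": 0, "y": 1},
        {"a": 0, "b": 0, "y": 0}]
laws = mine_laws(rows, target="y")
# On this data, the single-feature law "a => y" holds on every covered row.
```

Each returned law is directly readable as an if-then statement, which is what makes models built from such ensembles transparent.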
Consider $n$ points independently sampled from a density $p$ of class $\mathcal{C}^2$ on a smooth compact $d$-dimensional sub-manifold $\mathcal{M}$ of $\mathbb{R}^m$, and consider the generator of a random walk visiting these points according to a transition kernel $K$. We study the almost sure uniform convergence of this operator to the diffusive Laplace-Beltrami operator as $n$ tends to infinity. This work extends known results of the past 15 years. In particular, our result does not require the kernel $K$ to be continuous, which covers the cases of walks exploring $k$NN random graphs and geometric graphs, and convergence rates are given. The distance between the random walk generator and the limiting operator is separated into several terms: a statistical term, related to the law of large numbers, which is treated with concentration tools, and an approximation term, which we control with tools from differential geometry. The convergence of $k$NN Laplacians is detailed.
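A minimal sketch of the objects involved: points sampled from the uniform density on the unit circle, a discontinuous $k$NN transition kernel, and the rescaled generator $(P - I)/\varepsilon$. The bandwidth choice `eps` is an illustrative scaling, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 500, 50
theta = np.sort(rng.uniform(0, 2 * np.pi, n))
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # samples on the circle

# kNN kernel: each point jumps uniformly to one of its k nearest neighbours.
# This kernel is discontinuous, the case covered by the convergence result.
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)
W = np.zeros((n, n))
idx = np.argsort(D, axis=1)[:, :k]
W[np.repeat(np.arange(n), k), idx.ravel()] = 1.0

P = W / W.sum(axis=1, keepdims=True)        # random-walk transition matrix
eps = (k / n) ** 2                           # illustrative squared bandwidth
L = (P - np.eye(n)) / eps                    # rescaled generator of the walk

# On the circle the Laplace-Beltrami operator gives Delta cos = -cos, so the
# generator applied to f = cos(theta) should point against f on average.
f = np.cos(theta)
Lf = L @ f
```

Each row of the generator sums to zero by construction, and the alignment of `Lf` with `-f` is the finite-sample trace of the convergence to the Laplace-Beltrami operator.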